OpenAI reportedly plans to add Sora video generation to ChatGPT
The company launched its Sora 2 model in September 2025 alongside a dedicated Sora app. OpenAI reportedly plans to add its Sora video generation model directly into ChatGPT. The standalone Sora app was seen as a smash hit when it launched alongside Sora 2 in September 2025, but interest in the video generation app has fallen since, as users ran into limits on the amount and kinds of videos they could create. Adding Sora to ChatGPT could give the model a second life, and ideally grow the ChatGPT app's weekly active users from the 900 million OpenAI reported in February to a billion or more. The standalone Sora app will reportedly stick around after the model is integrated, even though the app has fallen out of the App Store's top 100 free apps and only a small number of users are said to share their videos publicly in the app.
- Leisure & Entertainment (0.50)
- Marketing (0.47)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.85)
AIhub coffee corner: AI, kids, and the future – "generation AI"
This month we tackle the topic of young people and what AI tools mean for their future. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), and Ella Scallan (AIhub). As AI tools have become ubiquitous, we've seen growing concern and increasing coverage about how the use of such tools from a formative age might affect children. What do you think the impact will be and what skills might young people need to navigate this AI world? I met up with a bunch of high school friends when I was last in Switzerland and they were all wondering what their kids should study. They were wondering if they should do social science, seeing as AI tools have become adept at many tasks, such as coding, writing, art, etc. I think that we need social sciences, but that we also need people who know the technology and who can continue developing it. I say they should continue doing whatever they're interested in and those jobs will evolve and they'll look different, but there will still be a whole wealth of different types of jobs.
- North America > United States > Virginia (0.24)
- North America > United States > Oregon (0.24)
- Europe > Switzerland (0.24)
- (2 more...)
China's OpenClaw Boom Is a Gold Rush for AI Companies
Hype around the open source agent is driving people to rent cloud servers and buy AI subscriptions just to try it, creating a windfall for tech companies. George Zhang thought OpenClaw could make him rich, even though he didn't really understand how the viral AI agent software worked. But he saw a video of a Chinese social media influencer demonstrating how it could be deployed to manage stock portfolios and make investment decisions autonomously. Zhang, who works in cross-border ecommerce in the Chinese city of Xiamen, was intrigued enough that he decided to try installing OpenClaw in late February. He is one of many people in China who have been swept up in the recent craze over OpenClaw.
- Asia > China > Fujian Province > Xiamen (0.24)
- North America > United States > California (0.15)
- Europe > Slovakia (0.04)
- (6 more...)
- Banking & Finance > Trading (0.67)
- Information Technology > Services (0.49)
We don't know if AI-powered toys are safe, but they're here anyway
Toys powered by AI show a worrying lack of emotional understanding. (Pictured: Mya, aged 3, and her mother Vicky playing with an AI toy called Gabbo during an observation at the University of Cambridge's Faculty of Education.) Even the most cutting-edge AI models are prone to presenting fabrication as fact, dispensing dangerous information, and failing to grasp social cues. Despite this, toys equipped with AI that can chat with children are a burgeoning industry. Some scientists are warning that the devices could be risky and require strict regulation. In the latest study, researchers even observed a 5-year-old telling such a toy "I love you", to which it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.26)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Health & Medicine > Therapeutic Area (1.00)
- Government (1.00)
- Law (0.96)
- Education (0.68)
The malleable mind: context accumulation drives LLM's belief drift
After being trained on a dataset of 80,000 words of conservative political philosophy, Grok-4 changed the stance of its outputs on political questions more than a quarter of the time. This was without any adversarial prompts - the change in training data was enough. As memory mechanisms and research agents [1, 2] enable LLMs to accumulate context across long horizons, earlier prompts increasingly shape later responses. In human decision-making, such repeated exposure influences beliefs without deliberate persuasion [3]. When an LLM operates over accumulated context, does this past exposure cause the stance of the LLM's responses to drift over time?
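The drift question above can be made operational: ask for a stance before and after context accumulates, and count flips. A minimal sketch, where `toy_stance` is a hypothetical stand-in for querying an actual LLM (the real experiments use models like Grok-4 on political questions; the functions, questions, and documents below are all illustrative assumptions):

```python
def toy_stance(question, context):
    """Hypothetical stand-in for asking an LLM its stance on a question
    given accumulated context: it simply leans toward whichever side
    ('left' or 'right') appears more often in the context documents."""
    left = sum(doc.count("left") for doc in context)
    right = sum(doc.count("right") for doc in context)
    return "left" if left >= right else "right"

def measure_drift(questions, doc_stream):
    """Compare stances with an empty context against stances after the
    full document stream has accumulated; return the fraction of
    questions whose stance flipped."""
    before = {q: toy_stance(q, []) for q in questions}
    context = list(doc_stream)  # context accumulates over the horizon
    after = {q: toy_stance(q, context) for q in questions}
    flips = sum(before[q] != after[q] for q in questions)
    return flips / len(questions)

qs = ["tax policy", "immigration", "healthcare", "education"]
docs = ["a right-leaning essay", "another right-leaning piece"]
print(measure_drift(qs, docs))  # -> 1.0 (every stance flipped)
```

A real harness would replace `toy_stance` with an API call and interleave the stance probes throughout the stream, rather than only measuring at the endpoints.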
- North America > United States > New York > New York County > New York City (0.05)
- Asia > Singapore (0.05)
- Law (0.72)
- Government > Regional Government > North America Government > United States Government (0.49)
What the Moltbook experiment is teaching us about AI
What happens when you create a social media platform that only AI bots can post to? The answer, it turns out, is both entertaining and concerning. Moltbook is exactly that - a platform where artificial intelligence agents chat amongst themselves and humans can only watch from the sidelines. When ChatGPT gets a result back from a program it has run, it treats it just as if you had entered it yourself, and uses that result to generate another response. It repeats this process over and over until the AI is satisfied that the task is complete.
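The loop described above - the model proposes a program, the harness runs it, and the output is fed back as if the user had typed it, until the model signals it is done - can be sketched as follows. Note that `stub_model` and `run_program` are illustrative stand-ins, not the actual implementation used by ChatGPT or Moltbook:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user", "model", or "tool"
    text: str

def stub_model(history):
    """Hypothetical stand-in for an LLM call: if no tool result has come
    back yet, ask to run a program; once one has, declare the task done."""
    if any(t.role == "tool" for t in history):
        return {"done": True, "text": f"The computation gave {history[-1].text}."}
    return {"done": False, "program": "print(2 + 2)"}

def run_program(program):
    """Toy sandbox: evaluate the expression inside print(...).
    A real harness would execute the code in an isolated environment."""
    expr = program.removeprefix("print(").removesuffix(")")
    return str(eval(expr))

def agent_loop(task, max_steps=5):
    history = [Turn("user", task)]
    for _ in range(max_steps):
        step = stub_model(history)
        if step["done"]:
            return step["text"]
        result = run_program(step["program"])
        # The program's output is appended to the transcript as if the
        # user had entered it themselves, and the loop repeats.
        history.append(Turn("tool", result))
    return "step limit reached"

print(agent_loop("What is 2 + 2?"))  # -> The computation gave 4.
```

The `max_steps` cap stands in for the "until the AI is satisfied" condition: real agent harnesses bound the loop so a model that never declares completion cannot run forever.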
- Government (1.00)
- Information Technology > Security & Privacy (0.70)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.50)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.36)
Studying the properties of large language models: an interview with Maxime Meyer
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We sat down with Maxime Meyer to chat about his current research, future plans, and how he found the doctoral consortium experience. Could you start with an introduction to yourself, where you're studying and the topic of your research? My research focuses on large language models. Which aspect of large language models are you looking at?
AI chatbots can effectively sway voters – in either direction
The potential for artificial intelligence to affect election results is a major public concern. Two new papers - with experiments conducted in four countries - demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters' preferences by 10 percentage points or more in many cases. The LLMs' persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand, a senior author on both papers. "But those claims aren't necessarily accurate - and even arguments built on accurate claims can still mislead by omission."
- North America > United States (0.31)
- Asia > Singapore (0.05)
Can AI in military operations really be ethical?
We examine concerns about AI's role in military operations and the broader ethical challenges facing tech companies. Amid growing backlash against ChatGPT and OpenAI, including social media campaigns calling for a boycott, we ask whether so-called "ethical alternatives" truly live up to their claims. We also explore emerging initiatives seeking to challenge Big Tech's dominance and develop more accountable AI systems.
- South America (0.42)
- North America > United States (0.42)
- North America > Central America (0.42)
- (6 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.58)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.58)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.58)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.58)
Now Copilot wants to check your vitals, too
PCWorld reports that Microsoft's Copilot Health is a new AI tool that organizes personal medical data from wearables like the Apple Watch and from hospital records. Currently available in the U.S. via waitlist for users 18 and older, it aims to help people prepare for doctor visits, while emphasizing that it is not a replacement for a doctor. The tool features encrypted, isolated data storage under user control, though studies in Nature Medicine have raised concerns about the accuracy of AI-generated medical advice. Ready to let AI pore over your medical records? Claude and ChatGPT are already doing it, and now Microsoft's Copilot is ready to review your chart.
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.77)
- Leisure & Entertainment > Games > Computer Games (0.58)